What is Salesforce AgentExchange?
Salesforce recently introduced AgentExchange, a marketplace designed to help businesses build and monetize AI agents for functions such as sales, HR, and finance. The goal is to use machine intelligence to automate tasks and deliver faster, more consistent workflows. An important question remains, however: under what conditions will these AI agents truly help workers, and when will they miss essential "human" knowledge?
My research speaks to this question in two papers. In the Journal of Marketing Research, we studied how salespeople at a microfinance bank relied on "private," intangible customer knowledge: the details that never made it into the firm's official database. The major downside was moral hazard: salespeople might sign up customers who looked profitable initially but defaulted later. The upside was that this hidden information could enhance long-term performance, because salespeople could identify riskier clients and put extra effort into securing their repayments over time.
In Management Science, we investigated how an agent's private information interacts with the firm's compensation plan. We found that when a plan emphasized long-term profitability, "farmer" agents, who are skilled at customer maintenance, focused on valuable upkeep tasks and used their private knowledge primarily to sign up clients who turned out to be profitable. The key takeaway is that the alignment (or misalignment) between agents' priorities and the firm's determines whether this hidden knowledge benefits or harms the business. Together, these two papers show that driving long-term profitability requires both the agents' private knowledge and a compensation structure that motivates them to use it.
Why AI Agents May Overlook Critical Customer Insights
These lessons are highly relevant for AI tools like AgentExchange. Many AI systems focus on measurable variables (e.g., short-term success rates or credit scores) while overlooking the intangible insights human agents accumulate over months or years of managing customer relationships. By relying on hard data alone, the AI can miss red flags that experienced employees would detect. Moreover, if a company's bonus structure prioritizes quick wins, an AI agent may amplify that behavior, for example, by pushing sales teams to pursue leads who fit the system's "profitable" criteria but are, in reality, poor long-term prospects. Speeding up such flawed choices can reproduce, or even exacerbate, the moral hazard problems I observed in my research.
AgentExchange can be very effective if businesses also use pay plans that reward both quick wins and long-term outcomes. However, if compensation focuses too much on immediate numbers, and if managers overlook that workers often possess insights that AI cannot access, companies risk falling into the same moral hazard problems described in my research. Technology can boost efficiency, but it can also magnify errors when deeper human knowledge is missing or when data alone is incomplete.
A recent article in The Economist also underscores that AI does not help all workers equally. Across a series of studies, researchers found that, in certain contexts, AI tools narrow the gap between high- and low-performing workers. For instance, in tasks such as customer chat, less-experienced employees often received a significant boost from AI, helping them catch up to seasoned experts. Other studies, however, showed that AI could widen performance gaps in more advanced tasks, such as maximizing profits, because top performers learned to interpret and leverage the AI's outputs more effectively than others. These mixed results suggest that whether AI agents help workers depends on factors such as task complexity and user skill level, in addition to the incentive structures identified in my studies.
Balancing AI Automation with Employee Expertise
To conclude, when deploying AI agents, businesses must balance AI recommendations with human judgment. By accounting for the hidden, person-to-person knowledge that staff acquire, creating incentive plans that encourage careful, long-term decisions, and recognizing the different impacts AI can have on workers at various skill levels, leaders can ensure that AI remains a supportive tool rather than a driver of quick but risky wins, safeguarding the organization's success over time.